In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the samples are drawn from the same distribution (in the two-sample case) or that the sample is drawn from the reference distribution (in the one-sample case). In each case, the distributions considered under the null hypothesis are continuous distributions but are otherwise unrestricted.

The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.

The Kolmogorov–Smirnov test can be modified to serve as a goodness-of-fit test. In the special case of testing for normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and it is known that using these to define the specific reference distribution changes the null distribution of the test statistic (see below). Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the Shapiro–Wilk test or the Anderson–Darling test. However, other tests have their own disadvantages. For instance, the Shapiro–Wilk test is known not to work well with samples containing many ties (many identical values).

==Kolmogorov–Smirnov statistic==
The empirical distribution function <math>F_n</math> for <math>n</math> iid observations <math>X_i</math> is defined as

:<math>F_n(x) = \frac{1}{n}\sum_{i=1}^{n} I_{[-\infty,x]}(X_i),</math>

where <math>I_{[-\infty,x]}(X_i)</math> is the indicator function, equal to 1 if <math>X_i \le x</math> and equal to 0 otherwise.

The Kolmogorov–Smirnov statistic for a given cumulative distribution function <math>F(x)</math> is

:<math>D_n = \sup_x |F_n(x) - F(x)|,</math>

where <math>\sup_x</math> is the supremum of the set of distances. By the Glivenko–Cantelli theorem, if the sample comes from distribution <math>F(x)</math>, then <math>D_n</math> converges to 0 almost surely in the limit when <math>n</math> goes to infinity. Kolmogorov strengthened this result by effectively providing the rate of this convergence (see below). Donsker's theorem provides a yet stronger result. In practice, the statistic requires a relatively large number of data points to properly reject the null hypothesis.
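As a concrete illustration of the definitions above, the following is a minimal Python sketch (assuming NumPy and SciPy are available; the sample data and reference distribution are arbitrary choices for the example) that computes the one-sample statistic <math>D_n</math> directly from the empirical distribution function and cross-checks it against <code>scipy.stats.kstest</code>.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def ks_statistic_one_sample(sample, cdf):
    """Compute D_n = sup_x |F_n(x) - F(x)| for a continuous reference CDF.

    For a continuous F, the supremum is attained at (or arbitrarily close
    to) one of the sample points, where the EDF jumps by 1/n.  Just after
    the i-th ordered observation the EDF equals i/n, and just before it
    equals (i-1)/n, so both one-sided gaps must be checked.
    """
    x = np.sort(np.asarray(sample))
    n = len(x)
    f = cdf(x)                                     # reference CDF at the order statistics
    d_plus = np.max(np.arange(1, n + 1) / n - f)   # EDF above F
    d_minus = np.max(f - np.arange(0, n) / n)      # F above EDF
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)  # hypothetical data for the example

d_n = ks_statistic_one_sample(sample, stats.norm.cdf)
print("D_n (from definition):", d_n)

# Cross-check with SciPy's implementation of the one-sample K-S test.
print("D_n (scipy.stats.kstest):", stats.kstest(sample, "norm").statistic)
</syntaxhighlight>

Because the empirical distribution function is a step function that changes only at the observations, evaluating the distance on both sides of each jump is sufficient to recover the supremum for a continuous <math>F</math>.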
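For the two-sample case, the corresponding statistic is the largest absolute difference between the two empirical distribution functions. A minimal sketch under the same assumptions (NumPy/SciPy available, arbitrary illustrative data), cross-checked against <code>scipy.stats.ks_2samp</code>:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def ks_statistic_two_sample(a, b):
    """Compute sup_x |F_a(x) - F_b(x)| between two empirical CDFs.

    Both EDFs are right-continuous step functions, so the supremum is
    attained just after one of the pooled observation points; evaluating
    both EDFs at every pooled point therefore suffices.
    """
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    pooled = np.concatenate([a, b])
    # searchsorted(..., side="right") counts observations <= t, i.e. n * F_n(t).
    cdf_a = np.searchsorted(a, pooled, side="right") / len(a)
    cdf_b = np.searchsorted(b, pooled, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=150)   # hypothetical sample 1
y = rng.normal(0.5, 1.2, size=120)   # hypothetical sample 2: shifted location, wider shape

print("D (from definition):", ks_statistic_two_sample(x, y))

# Cross-check with SciPy's two-sample K-S test.
print("D (scipy.stats.ks_2samp):", stats.ks_2samp(x, y).statistic)
</syntaxhighlight>

The shifted, wider second sample illustrates the point made above: the statistic responds to differences in both the location and the shape of the two empirical distribution functions.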